
AI Regulations: Illinois Tackles Deepfakes and EEOC Taps New AI Chief


With artificial intelligence (AI) rapidly advancing and permeating various aspects of society, lawmakers and government agencies are grappling with the challenges of regulating this technology.

In Illinois, a flurry of bills aimed at addressing AI risks recently passed, while California emerged as a key player in AI regulation at the state level. Meanwhile, the Equal Employment Opportunity Commission (EEOC) has appointed its first chief AI officer, reflecting a growing trend among federal agencies to ensure the safe and responsible development and use of AI. 

Illinois Lawmakers Target AI Risks

Illinois lawmakers have passed a series of bills aimed at addressing growing concerns surrounding AI. Among the 466 measures that cleared both chambers of the General Assembly, several focused on protecting consumers’ rights against deepfakes and combating the potential misuse of AI technology.

House Bill 4623, backed by state Attorney General Kwame Raoul, would expand current child pornography laws to cover AI-generated content. Supporters of the bill argued that a rapid increase in AI-generated child pornography could hinder law enforcement’s ability to identify real cases. 

Another measure, House Bill 4875, would protect individuals from having their voice, image or likeness duplicated by AI for commercial purposes without their consent. The bill, which passed both chambers unanimously, would allow recording artists and their contractors to seek damages for nonconsensual use of their likeness.

California Emerges as Key Player in AI Regulation 

As AI advances and permeates daily life, there has been a growing need for comprehensive regulation. However, while the federal government has taken some steps to address AI governance, “Congress has still failed to pass any legislation to either narrowly target specific AI risks or broadly ensure the responsible development and deployment of AI systems,” according to the Brookings Institution.

In this federal legislative vacuum, states are taking the lead, with California emerging as a key player. As PYMNTS previously reported, California has voted to regulate AI and businesses’ use of personal data.

California is home to 32 of Forbes’ top 50 global AI companies, including OpenAI, Anthropic, Meta and Google.

“The presence of such major AI developers makes California an attractive jurisdiction for advancing responsible AI policy,” the Brookings Institution said. Additionally, “California produces 14.5% of the United States GDP, and if it were a sovereign country, it would have the world’s fifth- or sixth-largest economy behind the U.S. (without California), China, Japan, Germany, and possibly India.”

Recently, California State Sen. Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to mitigate risks posed by advanced AI systems.

“Through its ambitious and broad provisions, this bill, if passed, could capture the promise of AI concentration in California,” the Brookings Institution wrote.

However, the Brookings article acknowledged potential limits to California’s regulatory reach.

“First, as with all regulations, there is the risk that the regulated companies will attempt to weaken or circumvent the policies impacting their business,” it noted.

EEOC Appoints Chief AI Officer

The EEOC has named Sivaram Ghorakavi as its deputy chief information officer and chief AI officer, becoming the latest federal agency to establish a role dedicated to overseeing the use of AI. The appointment comes in response to President Joe Biden’s executive order calling for the safe, secure and trustworthy development and use of AI across government agencies.

In his dual role, Ghorakavi will be responsible for coordinating intradepartmental and cross-agency efforts on AI and related issues, as well as leading EEOC innovations in technology that support the agency’s strategic mission.

“I see the Deputy Chief Information Officer as a leader who values creation and drives digital transformation and automation aligned to the EEOC’s critical mission of ensuring equal employment opportunity,” Ghorakavi said in a statement. “The success of this role depends on helping people understand both the benefits and risks of using technology, including AI, to ensure it is used in the right places at the right time and in the right amounts to enhance EEOC’s ability to operate effectively and to enforce key workplace protections.”

The EEOC’s move to appoint a chief AI officer reflects a growing trend among U.S. government agencies as they grapple with the safe and responsible use of AI. In recent months, the Department of Justice and the Commodity Futures Trading Commission have also established similar roles.

President Biden’s executive order directs government agencies to develop standards and guidance regarding AI systems, among other requirements, to ensure that the technology is developed and deployed in a manner that protects citizens’ rights.